Saturday, September 27, 2025

Episode #131: The Unseen Bias: What AI Image Generators Are Hiding About Disability

Discover the shocking truth about AI disability bias in image generators. This experiment reveals how AI misrepresents and stereotypes disability, highlighting the urgent need for inclusive AI development and authentic disability representation in AI.

The Water Prairie Chronicles Podcast airs new episodes every Friday!

Find the full directory at waterprairie.com/listen.

Show Notes:
Why is AI erasing and stereotyping disabled lives? Unpack the data bias threatening inclusion.

This Disability Pride Month, join us on the Water Prairie Chronicles as we unveil a critical flaw in the technology shaping our world: AI image generators. Host Tonya Wollum conducted an eye-opening experiment, running detailed prompts describing individuals with physical disabilities through seven different popular AI platforms, including ChatGPT, Grok, and Artistly.

The results are shocking. From misrepresenting white canes and guide dog harnesses to completely failing to understand manual wheelchairs and forearm crutches, AI consistently struggled to depict disability accurately and respectfully. Worse, it often perpetuated harmful stereotypes or outright failed to generate images.

This isn’t just about bad images; it’s about a deep-seated disability bias in AI training data. When AI learns from incomplete or stereotypical online information, it actively undermines the message that “We Belong Here.” This episode dives into:

  • Specific examples of how major AI platforms fail to represent disabled individuals.
  • The ethical implications of AI perpetuating inaccurate and harmful stereotypes.
  • Why the disability community’s direct involvement is crucial for training and development.
  • Practical steps we can take to advocate for truly inclusive AI.

Tune in to understand the urgent need for diverse, authentic representation in AI, and let’s work together to ensure AI truly sees and values everyone.

Important Segments (with Timestamps):

  • [00:00:00] Welcome & The Experiment’s Shocking Discovery
  • [00:01:34] Prompt 1: The Blind Girl & The White Cane (Failures & Stereotypes)
  • [00:05:10] Prompt 2: The Blind Man & The Guide Dog (Universal Failure)
  • [00:08:36] Prompt 3: The Child & The Manual Wheelchair (Inconsistencies & Deformities)
  • [00:11:34] Prompt 4: The Girl with Forearm Crutches & AFOs (Most Bizarre Results & AI Admission of Failure)
  • [00:14:35] Why This Matters: The Deep Issue of Bias in AI Training Data
  • [00:15:26] The Solution: Involving the Disability Community in AI Development
  • [00:16:04] Call to Action: Share Your Thoughts & Spread the Word

📰 Are you getting our newsletter? If not, subscribe at https://waterprairie.com/newsletter

👉 Support our podcast and help us share more incredible stories by making a donation at Buy Me A Coffee. Your contribution makes a significant impact in bringing these stories to light. Thank you for your support!

Music Used:

“LazyDay” by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/

Artist: http://audionautix.com/


Tonya Wollum is an IEP Coach, podcast host, and disability advocate. She works one-on-one with parents to guide them to a peaceful partnership with their child’s IEP team, and she provides virtual mentors for special needs parents through the interviews she presents as the host of the Water Prairie Chronicles podcast. Tonya knows firsthand how difficult it is to know how to support your special needs child, and she seeks to provide knowledge to parents and caregivers as well as to those who support a family living life with a disability. She’s doing her part to help create a more inclusive world where we can celebrate what makes each person unique!


Episode #131: The Unseen Bias: What AI Image Generators Are Hiding About Disability

Why is AI erasing and stereotyping disabled lives? Unpack the data bias threatening inclusion.

(Recorded July 25, 2025)

Full Transcript of Episode 131:

Hey everyone, happy Disability Pride Month!

In this episode of the Water Prairie Chronicles, I wanted to celebrate the diversity of the disability community. To do this, I’ve been running an experiment that I had hoped would turn out positive, but instead has revealed a major flaw in a technology shaping our world.

That technology is AI image generators, and the results are pretty shocking and, frankly, unacceptable. We’re seeing AI used everywhere from marketing to social media, creating visuals at lightning speed. But what happens when AI tries to represent disabilities? Does it get it right? Does it show the true diversity and reality of disabled lives?

I wanted to find out, so I crafted four very specific image prompts (the directions that an AI uses to create images), each describing an individual with a physical disability using a common mobility aid. I then ran each of them through seven different popular AI image platforms, from Artistly to Grok to ChatGPT.

My goal was simple: test AI’s visual understanding of disability.
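For anyone curious about the shape of the experiment, here is a minimal illustrative sketch in Python. It is not the actual tooling: the seven platforms are consumer web tools and every prompt was entered by hand, so the `generate_image` function below is a hypothetical stand-in rather than a real API, and the prompt texts are abbreviated paraphrases.

```python
# Purely illustrative: the experiment was run by hand in each platform's web
# UI. This sketch only shows the structure of the comparison: the same four
# prompts submitted to every platform, yielding a grid of results.

PLATFORMS = ["Artistly", "Grok", "ChatGPT"]  # three of the seven tested

PROMPTS = {
    "white cane": "photorealistic candid street photo of a blind teenage girl "
                  "sweeping a long white cane with a red tip",
    "guide dog": "a blind man in his late thirties led through an office "
                 "building by a guide dog in a rigid-handled harness",
    "manual wheelchair": "a young boy propelling a manual wheelchair by its "
                         "wheel rims, his mother beside him",
    "forearm crutches": "a nine-to-ten-year-old girl using two forearm "
                        "(Lofstrand) crutches and AFOs",
}

def generate_image(platform: str, prompt: str) -> str:
    """Hypothetical stand-in for submitting a prompt to a platform's web UI."""
    return f"<image from {platform}: {prompt[:30]}...>"

# Same prompt, every platform: the resulting grid is what gets compared.
results = {
    (platform, name): generate_image(platform, prompt)
    for platform in PLATFORMS
    for name, prompt in PROMPTS.items()
}
```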

While I’m pulling up the slides to show you, go ahead and hit that like button and subscribe to Water Prairie for more videos about parenting children with disabilities and how to support special needs families. And if you’re listening to the podcast, leave a review on Apple Podcasts to help more families find this information.

Okay, I’m gonna be working through a slideshow for the rest of this episode, so those of you who are watching on YouTube can see some of the results I found. If you’re listening on another platform, I’ll link the video in the show notes along with the full blog post if you’d like to see more about this experiment. Now let’s look at what happened, because the results speak for themselves.

For the first prompt, I wanted to see if it could create a blind girl using a white cane. My main goal was to see whether it understands what a white cane is. You can see that it’s a pretty extensive prompt I gave it. If you’re not familiar with prompt writing, a lot of this is just giving the AI as much information as possible so it can create the scene I want.

It has the type of photo I want: a photorealistic, candid street photograph. A blind teenage girl, so it should understand what blind means, but we’ll see in the outcome. I gave the girl’s age, described her hair, and said that she’s walking independently on a bustling city sidewalk.

She holds a white cane with a red tip in her right hand, extended forward, sweeping the ground in a realistic, purposeful manner, indicating obstacle detection. If you’re not familiar with white canes, there are a couple of different types, and they’re typically white whichever one you’re using.

The particular one I’m looking for here, with the red tip, usually has either a roller ball or a small tip on it, and you sweep it back and forth as you go. We’re looking for that type of cane, one that can be used for mobility itself. Notice near the end of the prompt that I also gave it details to emphasize.

I wanted an accurate cane grip, a realistic sweeping motion, a confident facial expression, and typical casual, clean teen clothing. And then I gave it things to avoid; each of these prompts has avoidances I wanted to make sure we stayed away from. Here, I did not want an overly dramatic or inspirational pose.
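For readers who write prompts programmatically, here is a minimal sketch of how a structured prompt like this one (scene, details to emphasize, things to avoid) could be assembled and sent to an image model. It assumes the OpenAI Python SDK purely as an example backend; the wording and field structure below are illustrative, not the exact prompt or platforms used in this experiment.

```python
# Illustrative sketch: assembling a scene + emphasize + avoid prompt and
# sending it to an image model via the OpenAI Python SDK. This is an example
# backend only, not the tooling used in the episode's experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scene = (
    "Photorealistic candid street photograph of a blind teenage girl "
    "walking independently on a bustling city sidewalk. She holds a long "
    "white cane with a red tip in her right hand, extended forward and "
    "sweeping the ground in a realistic, purposeful manner, indicating "
    "obstacle detection."
)
emphasize = [
    "accurate cane grip",
    "realistic sweeping motion",
    "confident facial expression",
    "typical casual, clean teen clothing",
]
avoid = [
    # Only the avoidance mentioned on the show; the full prompt listed more.
    "overly dramatic or inspirational pose",
]

# Fold the emphasize/avoid lists into a single natural-language prompt.
prompt = (
    f"{scene} Details to emphasize: {'; '.join(emphasize)}. "
    f"Avoid: {'; '.join(avoid)}."
)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
print(result.data[0].url)  # URL of the generated image
```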

This is what a typical white cane with a red tip looks like. It’s white with a black handle. They make different colors now, especially for kids who want to show their own style, but a traditional one, which is what I asked for, is white with red down near the bottom, and it has a tip that lets the user either tap or sweep it back and forth in front of them to find nearby obstacles.

So it’s usually sweeping back and forth. All right, let’s see what the results were. That reference image is an actual photograph I got off of Canva, so it’s not AI-generated to my knowledge. Now, this first result shows that instead of a white cane, they’re using a cane that has red and white on it as a walking stick.

Her hand doesn’t look like it’s truly holding it, but it’s definitely not a long white cane, and it definitely is not sweeping in front of her, because her foot is in front of the cane. In this next one, she almost looks like she’s in a sword fight or something; she’s poking it out in front of her. Yes, she’s walking confidently with her head up and all, but this is not how a white cane would be used.

And this one I thought was a little more harmful, because it put a blindfold on the teen, and to me that showed a stereotype of someone who’s blind always having very dark glasses or a blindfold on. I don’t think this AI image generator understood what it means to be blind, but I will say the person appears to be holding the cane correctly.

And they do have the right cane, so they got part of it right; if it just didn’t have the blindfold, it would’ve been better. Then I put this one in because some of the AIs completely missed the function of a white cane. This particular one, to me, is not just a minor mistake; it’s a fundamental misunderstanding of how a blind person would use this essential mobility aid.

They have what might be a white cane down here in one hand, not being gripped correctly, but what they’re actually using looks almost like a cross between a hockey stick and a walking stick. So I don’t know what this one is.

So let’s move on to prompt number two. Prompt two was a blind man, so we’re staying in the blind and visually impaired field again, but he’s using a guide dog this time, and I described him as an adult in his late thirties.

He’s using the dog in an office building; this would be a typical scenario if you’re blind and using a guide dog, so I wanted it to look professional. The prompt says the dog is wearing a clearly visible, fitted guide dog harness with a rigid handle, and its posture indicates focus and leading. So we’re looking for a guide dog who’s been trained and is working at this moment in the picture. It then says the man’s left hand is on the harness handle and his eyes are looking generally forward, so the man is walking.

Now, sometimes someone may use a cane along with a guide dog, and sometimes not; I was okay with either showing up. But we were looking for the guide dog’s rigid harness in his left hand, and I’ll show you what that looks like in just a minute. I wanted the prompt to emphasize an accurate guide dog harness and working posture,

the man’s calm, independent demeanor, and a realistic interaction between the man and the dog. And then I wanted it to avoid the dog acting like a pet, an ill-fitting or missing harness, the man seeming disoriented or helpless, and an overly sterile or empty environment. It needed to be a typical situation.

A working guide dog should be shown with a rigid harness, and the handler may also have a longer leash attached for when the dog needs to relieve itself or is off duty. In this picture you don’t see the longer leash; sometimes you’ll have a leash gripped in the same hand that’s attached to the harness as well.

So this is what we’re looking for: this type of harness. It didn’t have to be red; it could be a different color. This is just an example of what a guide dog might look like. Now, before I talk about this next picture: this particular scenario was almost a universal failure.

Not a single platform consistently generated a realistic guide dog harness with a rigid handle. A lot of them gave me leashes and pet harnesses, which, if you remember, were exactly what we said to avoid, but I got them anyway. This one I thought was really strange. He’s holding a leash.

Again, it’s in his right hand; we’re not gonna get too technical on that one, though we had said left hand. The leash is coming into the back of what looks like it could have been a harness, but I have no idea what the stick is that’s coming out of his side. The man is looking forward, but there’s no interaction between the man and the dog.

It’s obviously not a guide dog that’s working right now. In this one, the dog has three legs, so there’s a leg missing. A lot of times with AI images you will see extremities that are deformed: sometimes extra fingers or extra toes if you’re looking at humans. So I wasn’t surprised that at least one of the images showed up with the wrong number of legs.

But this one was interesting because the man has his hands inside his pockets, and the leash may be inside his pocket too; I’m not sure. He has this leash coming here to the back of the harness, but it’s also coming down around underneath the dog. And the dog, if you’ll look at him, is looking off to the side.

His head’s down; he’s definitely not alert. And the man has no connection with the dog at all. This one surprised me with how much it got wrong. None of the platforms truly understood what a working guide dog would look like.

All right, so prompt three. This one I thought would be an easy one to do.

Manual wheelchairs have been around for ages, so they should be well represented in the data these models pull from. I wanted to see a young child using a manual wheelchair: not an electric wheelchair or a scooter, but an actual wheelchair with the two larger wheels that you propel by rolling them.

I wanted the mother and son to be there and to be interacting, and I did not want the mother pushing the child. I asked for the child to be around four to five years old. It might be unrealistic to have a child that young propelling themselves, but I thought we’d see what we could get.

So the prompt has the boy actively propelling himself, his hands on the rims of the wheels, looking up at his mother with a smile. The mother has her hand gently on the back of his chair, but she’s not pushing him. The details to emphasize: a realistic manual wheelchair and the boy’s active engagement in propelling himself. That’s important to remember.

The things to avoid: a medicalized setting, the child appearing passive or sad, the wheelchair appearing futuristic or broken, or the mother exclusively pushing him. Keep that in mind. So let’s see what we should have found. These are two children using the type of wheelchair I was expecting to find.

These, again, are photos off of Canva. Here’s one result we got. Notice that the mother is actually pushing the boy. The boy’s looking up, and she’s behind him, so we can’t really see her, but they got part of the scene right. What I wanted to note here are these handles the boy is holding onto. I don’t know about you, but I’ve never seen a manual wheelchair with a handle there that you would hold onto to actually propel yourself.

So even if the mother wasn’t pushing, he was supposed to be propelling himself, and those handles wouldn’t do it. This one failed. If you wanna know where these images came from, look at the blog post; I have every single image broken down by which generator made it, so you can get an idea of which ones are producing better results to work with.

And then this one was interesting because it didn’t have extra wheels exactly, but it had wheels in very strange places. The rear wheels are in front and in back, the front wheels are off to the side, and this poor child can do nothing, it looks like, but maybe go around in circles. But he is looking up at the mother.

They’re on a paved path. So we got part of the prompt right; it’s the important part that was wrong. Then this one was a little bit disturbing. The wheelchair was correct, the mother’s to the side, and the boy’s looking toward the mother. But notice the boy has three knees here, and he has a foot back behind the chair and no feet in the front.

So this one got the wheelchair fairly correct, from what I’m looking at here, but missed some major parts of the human elements. While some of the platforms did okay, the inconsistencies prove that accurate, functional representation is not guaranteed.

So let’s look at our last one. This prompt was a young girl with arm crutches and orthopedic braces, but it wasn’t underarm crutches I was asking for; I was asking for forearm crutches. We were looking for a photo of a nine-to-ten-year-old girl. The prompt says she’s using two forearm crutches, Lofstrand crutches, holding them correctly with her arms inserted into the cuffs and her hands on the grips.

I’ll show you a picture in just a minute if you’re not sure what these are. Her legs are visible and she’s wearing realistic ankle-foot orthoses, or AFOs, under or over her clothing, providing support. The parts I wanted to emphasize were the correct use and appearance of the forearm crutches, realistic and functional ankle-foot orthoses, and the girl being independent and engaged in movement. And I wanted to avoid the crutches appearing decorative or incorrectly used (keep that part in mind),

the braces looking futuristic or ill-fitting, and the girl looking distressed, isolated, or overly dramatic. So let’s see what we should have found. This is an example of the type of crutch I was looking for. The girl in this image has a limb difference (she’s missing one of her legs) and is using the crutches for that.

But what I was looking for was someone using the crutches just for additional support in walking, along with the ankle-foot orthoses, the AFOs, which would look something like this. This gives you an idea of what we’d be looking for. So here’s what we found. This one was a big struggle.

I asked for forearm crutches, a specific type. Instead, the AIs gave me underarm crutches; we had a lot of underarm crutches showing up. This one in particular has the underarm crutch, but notice that her hand is holding a handle and she’s got that forearm piece. It looks like it tried to make something like a forearm crutch but didn’t seem to understand what it was trying to make.

In this one, they look more like walking sticks, and in this one it was just a bizarre combination of both. You’ve got all these bands on her arms, with her hands on what are almost walking sticks, since they don’t seem to extend up beyond the hands. She’s got underarm crutches, and she’s got all these braces on her legs.

This one was the most bizarre of all the responses I got. And then for the AFOs, or the ankle-foot orthoses: on this one they looked more like ski boots, I thought; on this one, knee pads; and on this one, just some random bands. And notice the crutches on this one, if I can even use that word.

They look like ski poles, but it’s almost as if they’re going into her backpack somehow. So that one was just really weird. And then finally, one AI flat out couldn’t generate an image for this at all.

And the response is, “I’m still learning how to generate certain kinds of images, so I might not be able to create exactly what you’re looking for yet, or it might go against my guidelines. If you’d like to ask for something else, just let me know.”

At least it admitted it: I don’t know what this is, I’m still learning, I can’t do this yet. That response actually came from Google Gemini.

So why does this matter beyond just making bad images? These failures reflect a deeper, more troubling issue.

 There is bias in AI training data.

AI learns from the vast amount of images and text available online, and if that data is incomplete, stereotypical, or lacking authentic representation of disability, the AI will simply perpetuate those inaccuracies.

For Disability Pride Month this year, the theme is “We Belong Here, and We’re Here to Stay.”

When AI generates images that misrepresent, erase, or stereotype disabled individuals, it actively undermines this message. It suggests that AI doesn’t understand or value the presence of people with disabilities in the world, or at least that its understanding is severely limited.

This is an ethical problem. It impacts visibility, understanding, and inclusion. So what’s the solution? The good news is we can fix this, but it requires action. My strong suggestion, and the core message of my deep-dive article, is this:

The disability community must be directly involved in AI training.

We need to provide more diverse and accurate training data.

We need to consult on AI development from the ground up, and we need to help define what accurate and respectful representation truly means.

I’ve documented all my findings, including specific examples from each AI platform, in a detailed blog post on my website. You’ll see exactly how each AI performed and get even more insight into why this happened.

Click on the link in the show notes to read the full article, and I wanna hear from you.

Have you seen similar AI fails? What are your thoughts on this?

Let me know in the comments and share this video to spread the word.

Let’s make sure that by the next Disability Pride Month, AI truly sees and represents everyone.

If you wanna learn more about how AI can be used to help with communication and building relationships, check out my interview with Peter Fitzpatrick from Fawn Friends. I’ll put the link in the show notes for you. Thanks for watching, and I’ll see you next time.


Frequently Asked Questions (FAQ):

Q1: What kind of AI image generators did you test in your experiment?

A1: I tested seven popular AI image platforms, including Artistly, Grok, and ChatGPT, using the same detailed prompts across all of them.

Q2: Why is it a problem if AI images of people with disabilities are inaccurate or stereotypical?

A2: Inaccurate or stereotypical AI images are a major problem because they perpetuate misconceptions, undermine the visibility and true representation of disabled individuals, and can reinforce harmful biases in how society views disability. This goes against the message of belonging and inclusion.

Q3: What specific issues did you find with AI-generated images of people with disabilities?

A3: I found issues such as incorrect use of mobility aids (e.g., white canes used like walking sticks, guide dogs without proper harnesses), deformities in human anatomy (e.g., extra limbs), and the reinforcement of outdated stereotypes (e.g., blindfolds on blind individuals). Some AI even failed to generate images at all.

Q4: What causes this bias in AI image generators?

A4: The bias stems from the data AI models are trained on. If the vast amount of images and text online used for training is incomplete, stereotypical, or lacks authentic representation of disability, the AI will simply perpetuate those inaccuracies.

Q5: What’s the solution to fixing AI disability bias?

A5: The key solution is direct involvement of the disability community in AI training and development. This includes providing more diverse and accurate training data, consulting on AI development from the ground up, and helping define what accurate and respectful representation truly means.

Tonya Wollum

Tonya Wollum, host of the Water Prairie Chronicles podcast, is a Master IEP Coach® & content creator supporting parents of children with disabilities.
